The 4 Things You Need for a Tech Bubble
On this episode, guest Brian Merchant walks us through a historical framework he used to analyze whether AI fits the classic signs of an economic bubble, and what that means for all of us. Chatter about an AI bubble has been everywhere lately, and top tech companies like Google, Meta, and Microsoft have doubled down on their AI investments for 2026. But how have analysts accurately identified forming tech bubbles in the past? Hosts Michael Calore and Lauren Goode sit down with Brian Merchant, WIRED contributor and author of the newsletter, to break down the four criteria some researchers have used in the past to understand and brace for the worst.

Please help us improve by filling out our listener survey. Write to us at uncannyvalley@wired.com. You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link.

Hey Lauren, how are you doing? It's earnings season, so a lot of us on the business desk here at WIRED have been tuning into tech companies' earnings reports and their earnings calls. And I guess that basically means it's CapEx season. Now that I'm a business desk reporter, I say CapEx. I throw it around at parties. But we are seeing a trend here: tech companies are sleeping on piles of money, and they aren't just sleeping on it. They're sharing big plans to spend it, especially on AI infrastructure. And this is all partly what is fueling all of this talk about a bubble, which we touched on a little bit a couple of weeks ago with our colleague Molly Taft.
Benchmarking Resilience and Sensitivity of Polyurethane-Based Vision-Based Tactile Sensors
Davis, Benjamin, Stuart, Hannah
Vision-based tactile sensors (VBTSs) are a promising technology for robots, providing them with dense signals that can be translated into an understanding of normal and shear load, contact region, texture classification, and more. However, existing VBTS tactile surfaces make use of silicone gels, which provide high sensitivity but easily deteriorate from loading and surface wear. We propose that polyurethane rubber, used for high-load applications like shoe soles, rubber wheels, and industrial gaskets, may provide improved physical gel resilience, potentially at the cost of sensitivity. To compare the resilience and sensitivity of silicone and polyurethane VBTS gels, we propose a series of standard evaluation benchmarking protocols. Our resilience tests assess sensor durability across normal loading, shear loading, and abrasion. For sensitivity, we introduce model-free assessments of force and spatial sensitivity to directly measure the physical capabilities of each gel without effects introduced from data and model quality. Finally, we include a bottle cap loosening and tightening demonstration as an example where polyurethane gels provide an advantage over their silicone counterparts.
How to Train Your Metamorphic Deep Neural Network
Sommariva, Thomas, Calderara, Simone, Porrello, Angelo
Neural Metamorphosis (NeuMeta) is a recent paradigm for generating neural networks of varying width and depth. Based on Implicit Neural Representation (INR), NeuMeta learns a continuous weight manifold, enabling the direct generation of compressed models, including those with configurations not seen during training. While promising, the original formulation of NeuMeta proves effective only for the final layers of the underlying model, limiting its broader applicability. In this work, we propose a training algorithm that extends the capabilities of NeuMeta to enable full-network metamorphosis with minimal accuracy degradation. Our approach follows a structured recipe comprising block-wise incremental training, INR initialization, and strategies for replacing batch normalization. The resulting metamorphic networks maintain competitive accuracy across a wide range of compression ratios, offering a scalable solution for adaptable and efficient deployment of deep models.
Progressive Depth Up-scaling via Optimal Transport
Cao, Mingzi, Wang, Xi, Aletras, Nikolaos
Depth up-scaling offers training efficiency by adding new layers to pre-trained models. However, most existing methods copy or average weights from base layers, neglecting neuron permutation differences. This limitation can potentially cause misalignment that harms performance. Inspired by applying Optimal Transport (OT) for neuron alignment, we propose Optimal Transport Depth Up-Scaling (OpT-DeUS). OpT-DeUS aligns and fuses Transformer blocks in adjacent base layers via OT for new layer creation, to mitigate neuron permutation mismatch between layers. OpT-DeUS achieves better overall performance and offers improved training efficiency over existing methods for continual pre-training and supervised fine-tuning across different model sizes. To further evaluate the impact of interpolation positions, our extensive analysis shows that inserting new layers closer to the top results in higher training efficiency due to shorter back-propagation time while obtaining additional performance gains.
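The core idea of OT-based neuron alignment can be sketched in a few lines. This is not the authors' OpT-DeUS implementation; it is a minimal illustration under simplifying assumptions: two adjacent layers' weight matrices, a squared-Euclidean matching cost, and the linear assignment problem (a discrete special case of optimal transport) to find the permutation before fusing the aligned weights into a new layer.

```python
# Minimal sketch (not the paper's code): align the output neurons of two
# adjacent layers with an optimal 1-to-1 matching, then average the
# aligned weights to create a new interpolated layer.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layers(w_a: np.ndarray, w_b: np.ndarray) -> np.ndarray:
    """Align rows (output neurons) of w_b to those of w_a, then average."""
    # Cost of matching neuron i of layer A to neuron j of layer B:
    # squared Euclidean distance between their incoming weight vectors.
    cost = ((w_a[:, None, :] - w_b[None, :, :]) ** 2).sum(-1)
    row, col = linear_sum_assignment(cost)   # optimal assignment plan
    w_b_aligned = w_b[col]                   # permute B's neurons to match A
    return 0.5 * (w_a + w_b_aligned)         # fused weights for the new layer

rng = np.random.default_rng(0)
w1 = rng.standard_normal((4, 8))
w2 = w1[[2, 0, 3, 1]]            # layer 2 is a row-permuted copy of layer 1
new_layer = fuse_layers(w1, w2)
# Naive averaging of w1 and w2 would blur unrelated neurons together;
# alignment first recovers the permutation, so fusion is exact here.
print(np.allclose(new_layer, w1))  # True
```

In this toy case the second layer is just a permuted copy of the first, so a naive element-wise average would mix mismatched neurons, while the aligned fusion reproduces the original weights exactly.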
Structured Pneumatic Fingerpads for Actively Tunable Grip Friction
Allison, Katherine, Kelly, Jonathan, Hatton, Benjamin
Grip surfaces with tunable friction can actively modify contact conditions, enabling transitions between higher- and lower-friction states for grasp adjustment. Friction can be increased to grip securely and then decreased to gently release (e.g., for handovers) or manipulate in-hand. Recent friction-tuning surface designs using soft pneumatic chambers show good control over grip friction; however, most require complex fabrication processes and/or custom gripper hardware. We present a practical structured fingerpad design for friction tuning that uses less than $1 USD of materials, takes only seconds to repair, and is easily adapted to existing grippers. Our design uses surface morphology changes to tune friction. The fingerpad is actuated by pressurizing its internal chambers, thereby deflecting its flexible grip surface out from or into these chambers. We characterize the friction-tuning capabilities of our design by measuring the shear force required to pull an object from a gripper equipped with two independently actuated fingerpads. Our results show that varying actuation pressure and timing changes the magnitude of friction forces on a gripped object by up to a factor of 2.8. We demonstrate additional features including macro-scale interlocking behaviour and pressure-based object detection.
pMixFed: Efficient Personalized Federated Learning through Adaptive Layer-Wise Mixup
Saadati, Yasaman, Rostami, Mohammad, Amini, M. Hadi
Traditional Federated Learning (FL) methods encounter significant challenges when dealing with heterogeneous data and providing personalized solutions for non-IID scenarios. Personalized Federated Learning (PFL) approaches aim to address these issues by balancing generalization and personalization, often through parameter decoupling or partial models that freeze some neural network layers for personalization while aggregating other layers globally. However, existing methods still face challenges of global-local model discrepancy, client drift, and catastrophic forgetting, which degrade model accuracy. To overcome these limitations, we propose pMixFed, a dynamic, layer-wise PFL approach that integrates mixup between shared global and personalized local models. Our method introduces an adaptive strategy for partitioning between personalized and shared layers, a gradual transition of personalization degree to enhance local client adaptation, improved generalization across clients, and a novel aggregation mechanism to mitigate catastrophic forgetting. Extensive experiments demonstrate that pMixFed outperforms state-of-the-art PFL methods, showing faster model training, increased robustness, and improved handling of data heterogeneity under different heterogeneous settings.
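The layer-wise mixup at the heart of this abstract can be illustrated with a small sketch. This is the general idea rather than the authors' pMixFed code; the per-layer coefficients and layer names below are illustrative assumptions: each layer gets its own mixing coefficient between the shared global weights and the personalized local weights.

```python
# Hedged sketch of layer-wise mixup between a shared global model and a
# personalized local model: mixed = lam * global + (1 - lam) * local,
# with a separate coefficient lam per layer.
import numpy as np

def layerwise_mixup(global_w, local_w, lam):
    """Mix per-layer weight arrays; lam maps layer name -> coefficient."""
    return {name: lam[name] * global_w[name] + (1.0 - lam[name]) * local_w[name]
            for name in global_w}

global_w = {"layer1": np.ones((2, 2)), "layer2": np.ones((2, 2))}
local_w  = {"layer1": np.zeros((2, 2)), "layer2": np.zeros((2, 2))}
# Lower layers lean on the global model; the top layer stays personalized.
lam = {"layer1": 0.9, "layer2": 0.1}
mixed = layerwise_mixup(global_w, local_w, lam)
print(mixed["layer1"][0, 0], mixed["layer2"][0, 0])  # 0.9 0.1
```

Making the coefficients adaptive per layer and per round, rather than fixed as here, is where approaches like the one described above differ from a simple parameter-decoupling split.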
Optimizing Personalized Federated Learning through Adaptive Layer-Wise Learning
Chen, Weihang, Ren, Jie, Li, Zhiqiang, Gao, Ling, Wang, Zheng
Real-life deployment of Federated Learning (FL) often faces non-IID data, which leads to poor accuracy and slow convergence. Personalized FL (pFL) tackles these issues by tailoring local models to individual data sources and using weighted aggregation methods for client-specific learning. However, existing pFL methods often fail to provide each local model with global knowledge on demand while maintaining low computational overhead. Additionally, local models tend to over-personalize their data during the training process, potentially dropping previously acquired global information. We propose FLAYER, a novel layer-wise learning method for pFL that optimizes local model personalization performance. FLAYER considers the different roles and learning abilities of neural network layers of individual local models. It incorporates global information for each local model as needed to initialize the local model cost-effectively. It then dynamically adjusts learning rates for each layer during local training, optimizing the personalized learning process for each local model while preserving global knowledge. Additionally, to enhance global representation in pFL, FLAYER selectively uploads parameters for global aggregation in a layer-wise manner. We evaluate FLAYER on four representative datasets in computer vision and natural language processing domains. Compared to six state-of-the-art pFL methods, FLAYER improves the inference accuracy, on average, by 7.21% (up to 14.29%).
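The layer-wise learning-rate adjustment described above can be sketched as follows. This is an illustration of the mechanism, not FLAYER's actual code; the layer names and rates are assumptions chosen to show the pattern: lower (feature) layers take small steps to preserve global knowledge, while the upper (head) layers take larger steps to personalize faster.

```python
# Illustrative sketch: one SGD step with a distinct learning rate per
# layer, so personalization speed can differ across the network.
import numpy as np

def sgd_step(weights, grads, layer_lrs):
    """Apply w <- w - lr * g with a per-layer learning rate."""
    return {name: weights[name] - layer_lrs[name] * grads[name]
            for name in weights}

weights   = {"features": np.full(2, 1.0), "head": np.full(2, 1.0)}
grads     = {"features": np.full(2, 0.5), "head": np.full(2, 0.5)}
layer_lrs = {"features": 0.01, "head": 0.1}  # personalize the head faster
new_w = sgd_step(weights, grads, layer_lrs)
print(new_w["features"][0], new_w["head"][0])  # 0.995 0.95
```

In a framework like PyTorch the same effect is typically achieved with per-parameter-group learning rates in the optimizer, rather than a hand-rolled update like this.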
21 Best Cyber Monday Clothing Deals on WIRED-Specific Fashion Finds
Life as a WIRED Gear writer comes with certain demands. Many of us are into outdoor activities, like climbing, running, skiing, camping, or biking. But these sports are expensive, and you need specialized clothing in order to not freeze to death. That's not counting all the time we spend shuffling boxes around and forcing our spouses to help us unbox TVs and move around treadmills and pizza ovens. Do you, too, want people to look at you and think, "That person spends most of their time swinging kettlebells and watching videos of self-driving cars?" Thanks to Cyber Monday, you can.
Personalized Quantum Federated Learning for Privacy Image Classification
Shi, Jinjing, Chen, Tian, Zhang, Shichao, Li, Xuelong
Quantum federated learning has brought about the improvement of privacy image classification, while the lack of personality of the client model may contribute to the suboptimal performance of quantum federated learning. A personalized quantum federated learning algorithm for privacy image classification is proposed to enhance the personality of the client model in the case of an imbalanced distribution of images. First, a personalized quantum federated learning model is constructed, in which a personalized layer is set for the client model to maintain the personalized parameters. Second, a personalized quantum federated learning algorithm is introduced to secure the information exchanged between the client and server. Third, the personalized federated learning is applied to image classification on the FashionMNIST dataset, and the experimental results indicate that the personalized quantum federated learning algorithm can obtain global and local models with excellent performance, even in situations where local training samples are imbalanced. The server's accuracy is 100% with 8 clients and a distribution parameter of 100, outperforming the non-personalized model by 7%. The average client accuracy is 2.9% higher than that of the non-personalized model with 2 clients and a distribution parameter of 1. Compared to previous quantum federated learning algorithms, the proposed personalized quantum federated learning algorithm eliminates the need for additional local training while safeguarding both model and data privacy. It may facilitate broader adoption and application of quantum technologies, and pave the way for more secure, scalable, and efficient quantum distributed machine learning solutions.
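The "personalized layer" mechanism described above can be illustrated with a classical stand-in (no quantum circuits; the layer names and values are hypothetical): the server averages only the shared layers across clients, while each client's personalized layer never leaves the device and is excluded from aggregation.

```python
# Sketch of personalized-layer aggregation: FedAvg over shared layers
# only; parameters named in PERSONAL stay local to each client.
import numpy as np

PERSONAL = {"personal_head"}  # layer names excluded from aggregation

def aggregate(clients):
    """Average shared layers across client models; skip personalized ones."""
    shared = [name for name in clients[0] if name not in PERSONAL]
    return {name: np.mean([c[name] for c in clients], axis=0)
            for name in shared}

c1 = {"shared": np.array([1.0, 3.0]), "personal_head": np.array([9.0])}
c2 = {"shared": np.array([3.0, 5.0]), "personal_head": np.array([-9.0])}
global_model = aggregate([c1, c2])
print(global_model["shared"])                 # [2. 4.]
print("personal_head" in global_model)        # False
```

The server thus holds only the averaged shared parameters; each client's personalized head keeps its own values regardless of what other clients learn.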
Point Cloud Geometry Scalable Coding with a Quality-Conditioned Latents Probability Estimator
Mari, Daniele, Guarda, André F. R., Rodrigues, Nuno M. M., Milani, Simone, Pereira, Fernando
The widespread usage of point clouds (PC) for immersive visual applications has resulted in the use of very heterogeneous receiving conditions and devices, notably in terms of network, hardware, and display capabilities. In this scenario, quality scalability, i.e., the ability to reconstruct a signal at different qualities by progressively decoding a single bitstream, is a major requirement that has yet to be conveniently addressed, notably in most learning-based PC coding solutions. This paper proposes a quality scalability scheme, named Scalable Quality Hyperprior (SQH), adaptable to learning-based static point cloud geometry codecs, which uses a Quality-conditioned Latents Probability Estimator (QuLPE) to decode a high-quality version of a PC learning-based representation, based on an available lower-quality base layer. SQH is integrated in the future JPEG PC coding standard, allowing the creation of a layered bitstream that can be used to progressively decode the PC geometry with increasing quality and fidelity. Experimental results show that SQH offers the quality scalability feature with very limited or no compression performance penalty at all when compared with the corresponding non-scalable solution, thus preserving the significant compression gains over other state-of-the-art PC codecs.